The goal of this project is to develop an algorithm for the Clearpath Robotics
Husky robot such that, given
a target point to reach, it computes the optimal path with the use of the global and local planners
and follows the resulting trajectory to reach the defined goal.
In robotics, Local planners focus on short-term, reactive navigation by adjusting the robot's path based on nearby
obstacles and dynamic changes in the environment. They are fast and handle immediate situations but don't guarantee
reaching a distant goal.
Global planners, on the other hand, compute an optimal or feasible path from the robot's current position to its goal,
considering the entire environment map. They create a long-term plan but rely on local planners to handle real-time
execution and obstacle avoidance.
The path planning will be done in an indoor
environment using a pre-computed global path that is refined and used for motion command
computation: the algorithm implements the Stanley controller, computing motion commands for a robot with Ackermann steering
so that the robot only performs car-like motions (no in-place rotations).
Lateral & Longitudinal Control
In order to minimize the cross-track error, i.e. the distance from the center of the front axle to the closest point on the global path,
we need to implement the lateral control law.
The lateral control law aims to minimize the heading error ψ(t), defined as the
difference between the path tangent angle and the actual heading of the robot,
and the cross-track error e(t) (red in the figure).
Since the global path is just a set of poses, we identify the closest pose as A (in the figure), from which a displacement Δ of 15 poses has been used to retrieve the
poses "A−Δ" and "A+Δ" that allow computing the tangent angle (purple line) to the path.
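As an illustrative sketch (function and variable names are mine, not the project's), the tangent angle at the closest pose A can be approximated from the poses at offset Δ = 15 on either side, clamped to the path bounds:

```python
import math

def tangent_angle(path, closest_idx, delta=15):
    """Approximate the path tangent at pose A (index closest_idx) using the
    poses at indices A-delta and A+delta, clamped to the path bounds.
    `path` is a list of (x, y) waypoints; names are illustrative."""
    i0 = max(closest_idx - delta, 0)
    i1 = min(closest_idx + delta, len(path) - 1)
    (x0, y0), (x1, y1) = path[i0], path[i1]
    return math.atan2(y1 - y0, x1 - x0)

# Straight path along +x: the tangent angle is 0 rad
path = [(i * 0.1, 0.0) for i in range(100)]
print(tangent_angle(path, 50))  # -> 0.0
```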
The goal is to write the computed driving commands (linear velocity and steering angle δ(t))
into the topic cmd_vel.
Crosstrack error scheme
To compute them, the standard Stanley lateral control law is used:

\[
\delta(t) = \psi(t) + \arctan\!\left(\frac{k \, e(t)}{v(t)}\right)
\]
where v(t) is the current velocity and k is the control gain: k determines the influence of the
cross-track error on the steering angle. A higher value makes the robot more responsive to cross-track
errors but may cause oscillations. If the robot oscillates, reduce k; if the robot is too sluggish in
correcting its path, increase it.
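A minimal sketch of the Stanley steering computation described above (the gain and steering limit below are placeholder values, not the project's tuned ones):

```python
import math

def stanley_steering(heading_error, crosstrack_error, v, k=0.5, max_steer=0.6):
    """Standard Stanley law: delta = psi + atan2(k * e, v).
    k and max_steer are illustrative values, not the project's tuned gains."""
    delta = heading_error + math.atan2(k * crosstrack_error, v)
    # Saturate to the mechanical steering limit of the vehicle
    return max(-max_steer, min(max_steer, delta))
```

On the path with no lateral offset the command is zero; a positive cross-track error bends the steering toward the path, more aggressively at low speed because v(t) sits in the denominator.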
I will skip some details that are available in the report published on GitHub.
The primary goal of Longitudinal control is to minimize the error between the actual and reference linear
velocity. The reference velocity v_d is set to the maximum allowable speed of the robot.
The velocity error e_v is defined as: e_v = v_d − v, where v is the current velocity previously computed.
A Proportional-Integral (PI) controller is used to determine the acceleration command a based on the velocity
error. The control law is given by:

\[
a = K_p \, e_v + K_i \int_0^t e_v \, d\tau
\]

where K_p and K_i are the proportional and integral gains, respectively. The integral term accumulates
the velocity error over time to eliminate steady-state error.
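A minimal PI controller sketch in discrete time (gains are placeholders, not the tuned values from the project):

```python
class PIController:
    """Minimal discrete-time PI controller for the longitudinal velocity error.
    Gains are placeholder values, not the project's tuned ones."""
    def __init__(self, kp, ki):
        self.kp = kp
        self.ki = ki
        self.integral = 0.0

    def step(self, error, dt):
        self.integral += error * dt            # accumulate e_v over time
        return self.kp * error + self.ki * self.integral
```

The integral state persists between calls, which is what removes the steady-state error; it is also why the controller must be stepped at a consistent rate with the real elapsed dt.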
Computing Command Velocity
Using the formula:
\[
v_{\text{cmd}} = v(t) + a \cdot \Delta t
\]
it's possible to compute the velocity command that minimizes the error between the desired velocity and the actual one. Δt represents the time elapsed between one
iteration and the next. Since the controller frequency is set to 20 Hz, the delta time between one iteration and the next should be ~0.05 s.
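As a toy illustration of this update at the 20 Hz rate, assuming the commanded velocity is reached instantly and using only the proportional term of the controller:

```python
vd, v, dt = 1.0, 0.0, 0.05   # reference velocity, current velocity, 20 Hz step
kp = 1.0                     # placeholder gain, not the project's tuned value

for _ in range(200):         # 10 simulated seconds
    a = kp * (vd - v)        # acceleration command from the velocity error
    v = v + a * dt           # v_cmd = v(t) + a * dt

print(round(v, 3))           # -> 1.0 (converged to the reference)
```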
In the examples above we can see the planned trajectory and the actual trajectory
that the robot follows.
The parameters K_p and K_i are essential components of the PI controller: the proportional gain determines the
reaction to the current error. It applies a correction based on the present value of the error: a higher
value increases the system's responsiveness, causing it to react more aggressively
to the error. This can help the system reach the desired state more quickly, but with high values the system became unstable,
overshooting the target. With lower values, the system responded slowly and took
longer to reach the goal.
The integral gain accumulates the error over time and applies a correction based on the accumulated
error. It helps to eliminate the steady-state error (the persistent difference between the desired and
actual values). Like the other gains, a high value of the integral gain increases the influence of the
accumulated error, which helps to reduce the steady-state error more quickly, but if too high it can
lead to excessive overshooting and instability.
Check for Collisions & Dynamic Obstacles
This task requires keeping the robot from colliding with obstacles in the environment. The idea is to check, at each
iteration, whether the predicted pose for the next iteration would overlap
forbidden cells of the costmap (the map is stored as a grid of cells, each cell holding a value between
FREE_SPACE and OBSTACLE).
To retrieve the predicted pose a transformation is applied, then it is verified that the pose does not "touch"
occupied cells.
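A minimal sketch of such a check against an occupancy grid (the grid layout, resolution handling, and threshold value are illustrative assumptions, not the project's actual costmap API):

```python
def pose_in_collision(grid, resolution, origin, x, y, obstacle_threshold=100):
    """Check whether world point (x, y) falls on an occupied costmap cell.
    `grid` is a 2D list indexed [row][col]; cell values run from
    FREE_SPACE (0) up to OBSTACLE (threshold 100 here, an assumption)."""
    col = int((x - origin[0]) / resolution)
    row = int((y - origin[1]) / resolution)
    if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
        return True   # treat poses outside the map as unsafe
    return grid[row][col] >= obstacle_threshold

grid = [[0, 0], [0, 100]]   # 2x2 map, one occupied cell at row 1, col 1
print(pose_in_collision(grid, 1.0, (0.0, 0.0), 1.5, 1.5))  # -> True
print(pose_in_collision(grid, 1.0, (0.0, 0.0), 0.5, 0.5))  # -> False
```

A real implementation would also check the cells covered by the robot's footprint rather than a single point, but the per-cell lookup is the same.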
Having achieved all the tasks above, the robot is able to reach a given goal in the map by planning and then
correctly and smoothly following the planned trajectory. However, the robot considers only the obstacles already
present in the map: if we introduce new obstacles (through the Gazebo simulation) the robot is not able
to update the costmap and thus plan the trajectory accordingly.
To make it robust to dynamic obstacles we needed to edit the move_base navigation
architecture: we added the ObstacleLayer plugin to the global costmap configuration file so that
the robot detects obstacles using the laser, and we defined the inflation parameters that determine how close the
robot can get to an obstacle.
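A sketch of what this costmap configuration change might look like (plugin and parameter names follow the standard costmap_2d conventions; the topic name and the numeric values are assumptions, not the project's actual file):

```yaml
plugins:
  - {name: static_layer,    type: "costmap_2d::StaticLayer"}
  - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

obstacle_layer:
  observation_sources: laser
  laser: {data_type: LaserScan, topic: scan, marking: true, clearing: true}

inflation_layer:
  inflation_radius: 0.55        # how far cost is propagated around obstacles
  cost_scaling_factor: 10.0     # how quickly cost decays with distance
```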
By looking at what happens in the simulation when new obstacles are added, we noticed that the planner
notices the obstacle thanks to the laser but does not generate a new plan. This problem was related to
the frequencies of both the local and global planners. So I decided to increase the publish frequency to
5 Hz for the local planner, since it is the one in charge of making new plans according to the
costmap.
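In move_base terms, such a change might look like raising the local costmap's rates (parameter names follow the standard move_base/costmap_2d conventions; whether the project tuned exactly these keys is an assumption):

```yaml
local_costmap:
  update_frequency: 5.0    # recompute the costmap at 5 Hz
  publish_frequency: 5.0   # publish the costmap at 5 Hz as well
```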
Dynamic obstacle detection
We applied this modification to the local costmap as well, obtaining a robot fully responsive to dynamic obstacles
that follows a trajectory to a given goal while avoiding obstacles.
Example of Path Planning (without dynamic obstacles)